
    3 Dimensional Dense Reconstruction: A Review of Algorithms and Dataset

    3D dense reconstruction refers to the process of obtaining the complete shape and texture of 3D objects from 2D planar images. 3D reconstruction is an important and extensively studied problem, but it is far from solved. This work systematically introduces classical 3D dense reconstruction methods based on geometric and optical models, as well as methods based on deep learning. It also introduces the datasets used for deep learning and discusses the performance, advantages, and disadvantages of deep learning methods on these datasets. Comment: 16 pages

    Deep Causal Learning for Robotic Intelligence

    This invited review discusses causal learning in the context of robotic intelligence. The paper first introduces psychological findings on causal learning in human cognition, then presents traditional statistical approaches to causal discovery and causal inference. It reviews recent deep causal learning algorithms, with a focus on their architectures and the benefits of using deep networks, and discusses the gap between deep causal learning and the needs of robotic intelligence.

    Interpretable NLG for Task-oriented Dialogue Systems with Heterogeneous Rendering Machines

    End-to-end neural networks have achieved promising performance in natural language generation (NLG). However, they are treated as black boxes and lack interpretability. To address this problem, we propose a novel framework, heterogeneous rendering machines (HRM), that interprets how neural generators render an input dialogue act (DA) into an utterance. HRM consists of a renderer set and a mode switcher. The renderer set contains multiple decoders that vary in both structure and functionality. At every generation step, the mode switcher selects an appropriate decoder from the renderer set to generate an item (a word or a phrase). To verify the effectiveness of our method, we have conducted extensive experiments on 5 benchmark datasets. In terms of automatic metrics (e.g., BLEU), our model is competitive with the current state-of-the-art method. The qualitative analysis shows that our model can interpret the rendering process of neural generators well. Human evaluation also confirms the interpretability of our proposed approach. Comment: Accepted as a conference paper at AAAI 202
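    A minimal, non-neural sketch of the renderer-set / mode-switcher control flow described in this abstract: a small set of renderers, each producing one item per step, with a switcher choosing among them and the choice recorded as an interpretable trace. All names and the hand-written selection rule here are hypothetical illustrations; in HRM both components are learned decoders, not rules.

        # Toy sketch of the renderer-set / mode-switcher idea (hypothetical names;
        # the actual HRM components are learned neural decoders).
        from typing import Callable, Dict, List, Tuple

        # A "renderer" maps a dialogue-act slot and its value to an output item.
        Renderer = Callable[[str, str], str]

        def copy_renderer(slot: str, value: str) -> str:
            # Copies a slot value verbatim into the utterance (e.g. a venue name).
            return value

        def template_renderer(slot: str, value: str) -> str:
            # Wraps a slot value in a short fixed phrase.
            return f"its {slot} is {value}"

        RENDERER_SET: List[Tuple[str, Renderer]] = [
            ("copy", copy_renderer),
            ("template", template_renderer),
        ]

        def mode_switcher(slot: str) -> int:
            # Stand-in for the learned switcher: copy names, template everything else.
            return 0 if slot == "name" else 1

        def render(dialogue_act: Dict[str, str]) -> Tuple[str, List[str]]:
            # One item is generated per step; recording which renderer produced each
            # item is what makes the rendering process inspectable.
            items, trace = [], []
            for slot, value in dialogue_act.items():
                idx = mode_switcher(slot)
                name, renderer = RENDERER_SET[idx]
                items.append(renderer(slot, value))
                trace.append(f"{slot}->{name}")
            return ", ".join(items), trace

        if __name__ == "__main__":
            utterance, trace = render({"name": "Pizza Hut", "food": "Italian", "area": "centre"})
            print(utterance)  # Pizza Hut, its food is Italian, its area is centre
            print(trace)      # ['name->copy', 'food->template', 'area->template']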

    On weakly s-normal subgroups of finite groups

    Assume that G is a finite group and H is a subgroup of G. We say that H is s-permutably imbedded in G if, for every prime number p dividing |H|, a Sylow p-subgroup of H is also a Sylow p-subgroup of some s-permutable subgroup of G; a subgroup H is s-semipermutable in G if H G_p = G_p H for every Sylow p-subgroup G_p of G with (p, |H|) = 1; a subgroup H is weakly s-normal in G if there exist a subnormal subgroup T of G and a subgroup H* of H such that G = HT and H ∩ T ≤ H*, where H* is either s-permutably imbedded or s-semipermutable in G. We investigate the influence of weakly s-normal subgroups on the structure of finite groups. Some recent results are generalized and unified.
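    For reference, the three subgroup properties above can be restated compactly in standard notation; this is a direct transcription of the abstract's definitions (with the Sylow p-subgroup written as G_p), not a new result.

        % Definitions from the abstract, restated in standard notation.
        \begin{itemize}
          \item $H$ is \emph{$s$-permutably imbedded} in $G$ if, for every prime
                $p \mid |H|$, a Sylow $p$-subgroup of $H$ is also a Sylow
                $p$-subgroup of some $s$-permutable subgroup of $G$.
          \item $H$ is \emph{$s$-semipermutable} in $G$ if $H G_p = G_p H$ for every
                Sylow $p$-subgroup $G_p$ of $G$ with $(p, |H|) = 1$.
          \item $H$ is \emph{weakly $s$-normal} in $G$ if there exist a subnormal
                subgroup $T \le G$ and a subgroup $H^{*} \le H$ with $G = HT$,
                $H \cap T \le H^{*}$, and $H^{*}$ either $s$-permutably imbedded or
                $s$-semipermutable in $G$.
        \end{itemize}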